Similar resources
Distributed Markov Chains
The formal verification of large probabilistic models is challenging. Exploiting the concurrency that is often present is one way to address this problem. Here we study a class of communicating probabilistic agents in which the synchronizations determine the probability distribution for the next moves of the participating agents. The key property of this class is that the synchronizations are d...
A Successive Lumping Procedure for a Class of Markov Chains
A class of Markov chains we call successively lumpable is specified, for which it is shown that the stationary probabilities can be obtained by successively computing the stationary probabilities of a propitiously constructed sequence of Markov chains. Each of the latter chains has a (typically much) smaller state space, and this yields significant computational improvements. We discuss how the re...
The Stationary Distributions of a Class of Markov Chains
The objective of this paper is to find the stationary distribution of a certain class of Markov chains arising in a biological population involved in a specific type of evolutionary conflict, known as Parker’s model. In a population of such players, the result of repeated, infrequent, attempted invasions using strategies from 0, 1, 2, …, m − 1, is a Markov chain. The stationary distribution...
Large Deviations for a Class of Nonhomogeneous Markov Chains
Large deviation results are given for a class of perturbed non-homogeneous Markov chains on finite state space which formally includes some stochastic optimization algorithms. Specifically, let {Pn} be a sequence of transition matrices on a finite state space which converge to a limit transition matrix P. Let {Xn} be the associated non-homogeneous Markov chain where Pn controls movement from ti...
Randomized Distributed Algorithms As Markov Chains
Distributed randomized algorithms, when they operate under a memoryless scheduler, behave as finite Markov chains: the probability at n-th step to go from a configuration x to another one y is a constant p that depends on x and y only. By Markov theory, we thus know that, no matter where the algorithm starts, the probability for the algorithm to be after n steps in a “recurrent” configuration t...
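The memoryless-scheduler view in the abstract above can be sketched directly: because each step is governed by one fixed transition matrix, iterating that matrix gives the distribution over configurations after n steps. A minimal illustration (the 3-state chain and its probabilities are invented for the example, not taken from the paper):

```python
# Under a memoryless scheduler, a randomized distributed algorithm is a
# finite Markov chain: a single transition matrix P governs every step.
# Toy 3-state chain; entries are illustrative, not from the paper.
P = [
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.4, 0.6],
]

def step(dist, P):
    """One Markov step: returns the row vector dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Distribution over configurations after n steps, starting from state 0.
dist = [1.0, 0.0, 0.0]
for _ in range(50):
    dist = step(dist, P)

# For this irreducible, aperiodic example the chain has settled onto its
# recurrent configurations, approximating the stationary distribution.
print([round(p, 4) for p in dist])
```

No matter which initial configuration replaces `[1.0, 0.0, 0.0]`, the iteration converges to the same limit, which is the "recurrent configuration" property the abstract invokes from Markov theory.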
Journal
Journal title: Electronic Proceedings in Theoretical Computer Science
Year: 2014
ISSN: 2075-2180
DOI: 10.4204/eptcs.156.3